-
While semi‐autonomous drones are increasingly used for road infrastructure inspection, their insufficient ability to independently handle complex scenarios beyond initial job planning hinders their full potential. To address this, the paper proposes a human–drone collaborative inspection approach leveraging flexible surface electromyography (sEMG) for conveying inspectors' speech guidance to intelligent drones. Specifically, this paper contributes a new data set, sEMG Commands for Piloting Drones (sCPD), and an sEMG‐based Cross‐subject Classification Network (sXCNet), for both command keyword recognition and inspector identification. sXCNet acquires the desired functions and performance through a synergetic effort of sEMG signal processing, spatial‐temporal‐frequency deep feature extraction, and multitasking‐enabled cross‐subject representation learning. The cross‐subject design permits deploying one unified model across all authorized inspectors, eliminating the need for subject‐dependent models tailored to individual users. sXCNet achieves notable classification accuracies of 98.1% on the sCPD data set and 86.1% on the public Ninapro db1 data set, demonstrating strong potential for advancing sEMG‐enabled human–drone collaboration in road infrastructure inspection.
Free, publicly-accessible full text available May 28, 2026.
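The multitask, cross-subject idea behind sXCNet (one shared representation feeding separate command-keyword and inspector-identity heads) can be illustrated with a minimal NumPy sketch. All sizes below (feature dimension, hidden width, 10 keywords, 5 inspectors) are hypothetical placeholders, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def shared_encoder(x, W):
    # One ReLU layer as a stand-in for sXCNet's deep
    # spatial-temporal-frequency feature extractor.
    return np.maximum(x @ W, 0.0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical sizes: flattened sEMG window features.
n_features, n_hidden = 200, 64
n_keywords, n_subjects = 10, 5   # assumed class counts

W_shared = rng.normal(scale=0.1, size=(n_features, n_hidden))
W_keyword = rng.normal(scale=0.1, size=(n_hidden, n_keywords))
W_subject = rng.normal(scale=0.1, size=(n_hidden, n_subjects))

x = rng.normal(size=(4, n_features))    # a batch of 4 sEMG windows
h = shared_encoder(x, W_shared)         # shared cross-subject representation
p_keyword = softmax(h @ W_keyword)      # command-keyword head
p_subject = softmax(h @ W_subject)      # inspector-identity head
```

Training both heads jointly against one shared encoder is what allows a single model to serve every authorized inspector.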
-
This work presents a scalable grayscale UV technique for fabricating spatially programmable soft actuators with diverse actuation behaviors in one actuator. The advanced programmability lays the foundation for soft robotics and adaptive devices.
Free, publicly-accessible full text available January 1, 2026.
-
Silent speech interfaces have been pursued to restore spoken communication for individuals with voice disorders and to facilitate intuitive communications when acoustic-based speech communication is unreliable, inappropriate, or undesired. However, the current methodology for silent speech faces several challenges, including bulkiness, obtrusiveness, low accuracy, limited portability, and susceptibility to interferences. In this work, we present a wireless, unobtrusive, and robust silent speech interface for tracking and decoding speech-relevant movements of the temporomandibular joint. Our solution employs a single soft magnetic skin placed behind the ear for wireless and socially acceptable silent speech recognition. The developed system alleviates several concerns associated with existing interfaces based on face-worn sensors, including a large number of sensors, highly visible interfaces on the face, and obtrusive interconnections between sensors and data acquisition components. With machine learning-based signal processing techniques, good speech recognition accuracy is achieved (93.2% accuracy for phonemes, and 87.3% for a list of words from the same viseme groups). Moreover, the reported silent speech interface demonstrates robustness against noises from both ambient environments and users' daily motions. Finally, its potential in assistive technology and human–machine interactions is illustrated through two demonstrations – silent speech enabled smartphone assistants and silent speech enabled drone control.
-
Abstract Dysphagia or difficulty swallowing is caused by the failure of neurological pathways to properly activate swallowing muscles. Current electromyography (EMG) systems for dysphagia monitoring are bulky and rigid, limiting their potential for long‐term and unobtrusive use. To address this, a machine learning‐assisted wearable EMG system is presented, utilizing self‐adhesive, skin‐conformal, semi‐transparent, and robust ionic gel electrodes. The presented electrodes possess good conductivity, superior skin contact, and good transmittance, ensuring high‐fidelity EMG sensing without impeding daily activities. Moreover, the optimized material and structural designs ensure wearing comfort and conformable skin‐electrode contact, allowing for long‐term monitoring with high accuracy. Machine learning and mel‐frequency cepstral coefficient techniques are employed to classify swallowing events based on food types and volumes. Through an analysis of electrode placement on the chin and neck, the proposed system is able to effectively distinguish between different food types and water volumes using a small number of channels, making it suitable for continuous dysphagia monitoring. This work represents an advancement in machine learning assisted EMG systems for the classification and regression of swallowing events, paving the way for more efficient, unobtrusive, and long‐term dysphagia monitoring systems.
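The mel-frequency cepstral coefficient (MFCC) features mentioned above follow a standard pipeline: power spectrum of a signal window, pooling through a triangular mel filter bank, then a DCT-II of the log energies. The NumPy sketch below uses assumed values for the sampling rate, filter count, and coefficient count; they are illustrative, not the paper's settings:

```python
import numpy as np

def mel_filterbank(n_filters, n_fft, fs):
    # Triangular filters spaced evenly on the mel scale.
    mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    inv_mel = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    pts = inv_mel(np.linspace(mel(0.0), mel(fs / 2.0), n_filters + 2))
    bins = np.floor((n_fft + 1) * pts / fs).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for i in range(1, n_filters + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        fb[i - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising edge
        fb[i - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling edge
    return fb

def mfcc(frame, fs=1000, n_filters=20, n_coeffs=8):
    # Assumed parameters: 1 kHz sampling, 20 mel filters, 8 coefficients.
    n_fft = len(frame)
    power = np.abs(np.fft.rfft(frame)) ** 2 / n_fft
    log_e = np.log(mel_filterbank(n_filters, n_fft, fs) @ power + 1e-10)
    # DCT-II decorrelates the log filter-bank energies.
    k = np.arange(n_coeffs)[:, None]
    n = np.arange(n_filters)[None, :]
    basis = np.cos(np.pi * k * (2 * n + 1) / (2 * n_filters))
    return basis @ log_e

rng = np.random.default_rng(1)
emg_frame = rng.normal(size=256)   # one 256-sample EMG window
coeffs = mfcc(emg_frame)           # compact 8-dimensional feature vector
```

Computing such a vector per window turns raw EMG into a compact feature sequence suitable for the swallowing-event classifiers described above.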
-
Abstract Silent speech interfaces offer an alternative and efficient communication modality for individuals with voice disorders and when the vocalized speech communication is compromised by noisy environments. Despite the recent progress in developing silent speech interfaces, these systems face several challenges that prevent their wide acceptance, such as bulkiness, obtrusiveness, and immobility. Herein, the material optimization, structural design, deep learning algorithm, and system integration of mechanically and visually unobtrusive silent speech interfaces are presented that can realize both speaker identification and speech content identification. Conformal, transparent, and self‐adhesive electromyography electrode arrays are designed for capturing speech‐relevant muscle activities. Temporal convolutional networks are employed for recognizing speakers and converting sensing signals into spoken content. The resulting silent speech interfaces achieve a 97.5% speaker classification accuracy and 91.5% keyword classification accuracy using four electrodes. The speech interface is further integrated with an optical hand‐tracking system and a robotic manipulator for human‐robot collaboration in both assembly and disassembly processes. The integrated system achieves the control of the robot manipulator by silent speech and facilitates the hand‐over process by hand motion trajectory detection. The developed framework enables natural robot control in noisy environments and lays the ground for collaborative human‐robot tasks involving multiple human operators.
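Temporal convolutional networks build on causal dilated 1-D convolutions, so each output sample depends only on current and past inputs. The NumPy sketch below shows that core operation on a four-electrode signal; the channel counts, two-layer depth, and 10-keyword output layer are assumed for illustration and are not the paper's exact model:

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    # Causal 1-D convolution: y[:, t] sees only x at t, t-d, t-2d, ...
    # x: (in_ch, time); w: (out_ch, in_ch, kernel), w[..., 0] = oldest tap.
    out_ch, in_ch, k = w.shape
    pad = dilation * (k - 1)
    xp = np.pad(x, ((0, 0), (pad, 0)))     # left-pad so output stays causal
    T = x.shape[1]
    y = np.zeros((out_ch, T))
    for t in range(T):
        taps = xp[:, t + pad - dilation * np.arange(k)[::-1]]  # (in_ch, k)
        y[:, t] = np.einsum('oik,ik->o', w, taps)
    return y

rng = np.random.default_rng(2)
emg = rng.normal(size=(4, 100))                # 4 electrodes, 100 samples
w1 = rng.normal(scale=0.3, size=(8, 4, 3))
w2 = rng.normal(scale=0.3, size=(8, 8, 3))
h = np.maximum(causal_dilated_conv(emg, w1, dilation=1), 0)
h = np.maximum(causal_dilated_conv(h, w2, dilation=2), 0)  # wider receptive field
feat = h.mean(axis=1)                          # global average pooling
W_out = rng.normal(scale=0.3, size=(10, 8))    # hypothetical 10-keyword head
logits = W_out @ feat
```

Stacking layers with growing dilations widens the receptive field exponentially, which is what lets a TCN map a whole EMG window to a keyword or speaker label.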
-
Abstract Lip‐reading provides an effective speech communication interface for people with voice disorders and for intuitive human–machine interactions. Existing systems are generally challenged by bulkiness, obtrusiveness, and poor robustness against environmental interferences. The lack of a truly natural and unobtrusive system for converting lip movements to speech precludes the continuous use and wide‐scale deployment of such devices. Here, the design of a hardware–software architecture to capture, analyze, and interpret lip movements associated with either normal or silent speech is presented. The system can recognize different and similar visemes. It is robust in a noisy or dark environment. Self‐adhesive, skin‐conformable, and semi‐transparent dry electrodes are developed to track high‐fidelity speech‐relevant electromyogram signals without impeding daily activities. The resulting skin‐like sensors can form seamless contact with the curvilinear and dynamic surfaces of the skin, which is crucial for a high signal‐to‐noise ratio and minimal interference. Machine learning algorithms are employed to decode electromyogram signals and convert them to spoken words. Finally, the applications of the developed lip‐reading system in augmented reality and medical service are demonstrated, which illustrate the great potential in immersive interaction and healthcare applications.
